
Multi-Agent Reinforcement Learning and Real-Time Decision-Making in Robotic Soccer for Virtual Environments

Taourirte, Aya, Mia, Md Sohag

arXiv.org Artificial Intelligence

The deployment of multi-agent systems in dynamic, adversarial environments like robotic soccer necessitates real-time decision-making, sophisticated cooperation, and scalable algorithms to avoid the curse of dimensionality. While Reinforcement Learning (RL) offers a promising framework, existing methods often struggle with the multi-granularity of tasks (long-term strategy vs. instant actions) and the complexity of large-scale agent interactions. This paper presents a unified Multi-Agent Reinforcement Learning (MARL) framework that addresses these challenges. First, we establish a baseline using Proximal Policy Optimization (PPO) within a client-server architecture for real-time action scheduling, with PPO demonstrating superior performance (4.32 avg. goals, 82.9% ball control). Second, we introduce a Hierarchical RL (HRL) structure based on the options framework to decompose the problem into a high-level trajectory planning layer (modeled as a Semi-Markov Decision Process) and a low-level action execution layer, improving global strategy (avg. goals increased to 5.26). Finally, to ensure scalability, we integrate mean-field theory into the HRL framework, simplifying many-agent interactions into a single agent vs. the population average. Our mean-field actor-critic method achieves a significant performance boost (5.93 avg. goals, 89.1% ball control, 92.3% passing accuracy) and enhanced training stability. Extensive simulations of 4v4 matches in the Webots environment validate our approach, demonstrating its potential for robust, scalable, and cooperative behavior in complex multi-agent domains.
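The mean-field idea described above can be sketched in a few lines: instead of conditioning each agent's value on the full joint action of all teammates and opponents, the Q-function sees only the agent's own action and the *average* action of its neighbours. The tabular setting, the sizes, and the Boltzmann expectation below are illustrative assumptions for exposition, not the paper's actual actor-critic architecture.

```python
import numpy as np

N_STATES, N_ACTIONS = 6, 3
ALPHA, GAMMA, TEMP = 0.1, 0.95, 1.0

def mean_action(neighbour_actions):
    """One-hot average of neighbours' actions -- the 'mean field'."""
    return np.eye(N_ACTIONS)[neighbour_actions].mean(axis=0)

# Q is indexed by (state, own action, argmax of the mean field),
# collapsing the joint-action space from exponential to linear size.
Q = np.zeros((N_STATES, N_ACTIONS, N_ACTIONS))

def mf_q_update(s, a, r, s_next, mbar, mbar_next):
    """One tabular mean-field Q-learning step (illustrative)."""
    k, k_next = int(mbar.argmax()), int(mbar_next.argmax())
    # Boltzmann expectation over the agent's own next action
    logits = Q[s_next, :, k_next] / TEMP
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    target = r + GAMMA * pi @ Q[s_next, :, k_next]
    Q[s, a, k] += ALPHA * (target - Q[s, a, k])

# toy usage: one transition observed alongside three neighbours
mbar = mean_action([0, 1, 1])
mbar_next = mean_action([1, 1, 2])
mf_q_update(s=0, a=2, r=1.0, s_next=1, mbar=mbar, mbar_next=mbar_next)
# with Q initialised to zero, the updated entry is ALPHA * r = 0.1
```

The point of the indexing trick is scalability: each agent solves what is effectively a two-player problem against the population average, which is what lets the approach stabilise training as the number of agents grows.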


Generative World Models of Tasks: LLM-Driven Hierarchical Scaffolding for Embodied Agents

Hill, Brennen

arXiv.org Artificial Intelligence

Recent advances in agent development have focused on scaling model size and raw interaction data, mirroring successes in large language models. However, for complex, long-horizon multi-agent tasks such as robotic soccer, this end-to-end approach often fails due to intractable exploration spaces and sparse rewards. We propose that an effective world model for decision-making must model the world's physics and also its task semantics. A systematic review of 2024 research in low-resource multi-agent soccer reveals a clear trend towards integrating symbolic and hierarchical methods, such as Hierarchical Task Networks (HTNs) and Bayesian Strategy Networks (BSNs), with multi-agent reinforcement learning (MARL). These methods decompose complex goals into manageable subgoals, creating an intrinsic curriculum that shapes agent learning. We formalize this trend into a framework for Hierarchical Task Environments (HTEs), which are essential for bridging the gap between simple, reactive behaviors and sophisticated, strategic team play. Our framework incorporates the use of Large Language Models (LLMs) as generative world models of tasks, capable of dynamically generating this scaffolding. We argue that HTEs provide a mechanism to guide exploration, generate meaningful learning signals, and train agents to internalize hierarchical structure, enabling the development of more capable and general-purpose agents with greater sample efficiency than purely end-to-end approaches.
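The task decomposition at the heart of the HTE framing can be made concrete with a toy task tree: a sparse long-horizon goal ("score") is split into subgoals, and the first unsatisfied leaf becomes the agent's current subgoal, giving it a dense learning signal. The task names and state predicates below are illustrative inventions, not drawn from any of the reviewed systems.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    done: Callable[[dict], bool]            # predicate over the world state
    subtasks: List["Task"] = field(default_factory=list)

    def next_subgoal(self, state):
        """Depth-first: the first unfinished leaf is the current subgoal."""
        for t in self.subtasks:
            if not t.done(state):
                return t.next_subgoal(state)
        return self if not self.done(state) else None

# Hypothetical decomposition of a sparse goal into an intrinsic curriculum.
score = Task("score", lambda s: s["goals"] > 0, [
    Task("get_ball", lambda s: s["has_ball"]),
    Task("advance",  lambda s: s["x"] > 0.8),
    Task("shoot",    lambda s: s["goals"] > 0),
])

state = {"has_ball": False, "x": 0.1, "goals": 0}
print(score.next_subgoal(state).name)   # -> get_ball
state["has_ball"] = True
print(score.next_subgoal(state).name)   # -> advance
```

In the LLM-driven variant the article proposes, a generative model would emit such trees (the predicates and subtasks) dynamically for each situation, rather than a designer writing them by hand.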


Beating a Defender in Robotic Soccer: Memory-Based Learning of a Continuous Function

Neural Information Processing Systems

Learning to adjust to an opponent's position is critical to the success of intelligent agents collaborating on specific tasks in unfriendly environments. This paper describes our work on a memory-based technique for choosing an action based on a continuous-valued state attribute indicating the position of an opponent. We investigate the question of how an agent performs in nondeterministic variations of the training situations. Our experiments indicate that when the random variations fall within some bound of the initial training, the agent performs better with some initial training than from a tabula rasa.
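The memory-based technique can be illustrated with a small instance-based sketch: store (opponent position, outcome) experiences per action, then pick the action whose nearest stored cases predict the best outcome. The distance weighting, the value of k, and the two-action setup are illustrative assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

memory = defaultdict(list)   # action -> list of (opponent_position, outcome)

def record(action, position, outcome):
    memory[action].append((position, outcome))

def predicted_value(action, position, k=3):
    """Distance-weighted average outcome of the k nearest stored cases."""
    cases = sorted(memory[action], key=lambda c: abs(c[0] - position))[:k]
    if not cases:
        return 0.0
    weights = [1.0 / (1e-6 + abs(p - position)) for p, _ in cases]
    return sum(w * o for w, (_, o) in zip(weights, cases)) / sum(weights)

def choose_action(position):
    return max(memory, key=lambda a: predicted_value(a, position))

# toy training data: shooting away from the opponent tends to succeed
for p, a, o in [(0.2, "shoot_right", 1), (0.3, "shoot_right", 1),
                (0.8, "shoot_left", 1), (0.7, "shoot_left", 1),
                (0.2, "shoot_left", 0), (0.8, "shoot_right", 0)]:
    record(a, p, o)

print(choose_action(0.25))   # -> shoot_right
```

Because prediction interpolates over nearby cases, the agent generalises to opponent positions it never saw exactly, which is the behaviour the experiments probe under nondeterministic variations of the training situations.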




AI Magazine

Sony has provided a robot platform for research and development in physical agents, namely, fully autonomous legged robots. In this article, we describe our work using Sony's legged robots to participate in the RoboCup-98 legged robot demonstration and competition. Robotic soccer represents a challenging environment for research in systems with multiple robots that need to achieve concrete objectives, particularly in the presence of an adversary. Furthermore, RoboCup offers an excellent opportunity for robot entertainment. We introduce the RoboCup context and briefly present Sony's legged robot.



AI Magazine

Robotic soccer is a challenging research domain in which multiple agents must collaborate in an adversarial environment to achieve specific objectives, and it offers the opportunity to investigate a large spectrum of issues relevant to the development of complete autonomous agents (Asada et al. 1998; Kitano, Tambe, et al. 1997). The fast-paced nature of the domain necessitates real-time sensing coupled with quick behaving and decision making. The behaviors and decision-making processes can range from the most simple reactive behaviors, such as moving directly toward the ball, to arbitrarily complex reasoning procedures that take into account the actions and perceived strategies of teammates and opponents.


Robotic soccer during RoboCup Asia-Pacific 2017

USATODAY - Tech Top Stories

More than 1,000 students from 25 countries, fielding more than 130 teams, will participate in the four-day contest, the region's first robot competition. The event is held to encourage global robotics research and development.


Soccer-playing robots eye their own world cup (The Japan Times)

AITopics Original Links

WASHINGTON – When robots play soccer, it looks like a game played by 5-year-olds: they swarm around the ball, kick haphazardly and fall down a lot. However, robot teams have made strides in recent years, and some researchers believe the humanoids could challenge the world's best players in a decade or two. "Maybe in 20 years we could develop a team of robots to play against the best World Cup teams," said Daniel Lee, who heads the University of Pennsylvania robotics lab, which is seeking a fourth consecutive RoboCup in Brazil this month, the premiere event for robotic soccer. Robotic soccer, says Lee, is more than fun and games. It involves artificial intelligence and complex algorithms that help provide a better understanding of human vision, cognition and mobility.



The CS Freiburg Team: Playing Robotic Soccer Based on an Explicit World Model

Gutmann, Jens-Steffen, Hatzack, Wolfgang, Herrmann, Immanuel, Nebel, Bernhard, Rittinger, Frank, Topor, Augustinus, Weigel, Thilo

AI Magazine

Robotic soccer is an ideal task to demonstrate new techniques and explore new problems. Moreover, problems and solutions can easily be communicated because soccer is a well-known game. Our intention in building a robotic soccer team and participating in RoboCup-98 was, first, to demonstrate the usefulness of the self-localization methods we have developed. Second, we wanted to show that playing soccer based on an explicit world model is much more effective than other methods. Third, we intended to explore the problem of building and maintaining a global team world model. As has been demonstrated by the performance of our team, we were successful with the first two points. Moreover, robotic soccer gave us the opportunity to study problems in distributed, cooperative sensing.